perm filename CANNOT[S88,JMC] blob sn#859106 filedate 1988-06-30 generic text, type C, neo UTF8
cannot[s88,jmc]		Formalizing inability

One of the next problems for AI is to formalize inability.  We need it
in order to formalize prevention, if prevention is one of the acts to be
included, since preventing an outcome amounts to making agents unable
to achieve it.  The dogs and trash cans problem is an example.

The straightforward approach is to introduce a predicate  is-action
and circumscribe it.  Maybe it has to be  is-action(s)  or is-action(person,s)
if the person is not determined by context.  Something like this is
required.  The situation calculus axiom sets used so far do not allow
delimiting the set of possible actions.
Should we call it  doable(person,s)?
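As a minimal sketch (the action names push and nuzzle are hypothetical), circumscribing  is-action  over an axiom set A that asserts the known actions is intended to yield their closure as the only actions:

```latex
% A asserts the known actions (names hypothetical):
%   is-action(push) \land is-action(nuzzle)
% Circumscribing is-action in A should give:
\mathrm{Circ}[A;\, \textit{is-action}] \models
  \forall a.\; \textit{is-action}(a) \equiv
    (a = \textit{push} \lor a = \textit{nuzzle})
```

Delimiting the actions this way is what the usual situation calculus axiom sets lack, and it is the premise on which any proof of inability must rest.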

Of course, after circumscribing the actions, one then has to argue
that none of the known actions produces the effect in question.
Actually one has to prove that no sequence of actions does it, or
that no strategy does it, or that no strategy can be guaranteed to do
it, depending on the notion of cannot that we want to formalize.  An
interesting question is whether one can work with a notion of cannot
that is ambiguous in general but is nonmonotonically presumed to be
unambiguous in a particular case unless there is contrary evidence.
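For the sequence-of-actions notion, the argument can be sketched computationally in a finite toy domain (everything below — the states, the actions push-can and nuzzle-lid, the goals — is a hypothetical illustration, not part of the formalism above): once circumscription has delimited the known actions, "cannot" means no state satisfying the goal is reachable by any finite sequence of them.

```python
from collections import deque

# Hypothetical miniature dogs-and-trash-cans domain.  A state is the pair
# (lid_on, upright).  The two functions below are the ONLY actions, playing
# the role of the circumscribed is-action predicate.

def push_can(state):
    lid_on, upright = state
    return (lid_on, False)            # pushing knocks the can over

def nuzzle_lid(state):
    lid_on, upright = state
    if not upright:
        return (False, upright)       # the lid comes off a toppled can
    return state                      # no effect while the can stands

KNOWN_ACTIONS = [push_can, nuzzle_lid]

def reachable(start):
    """All states reachable by some finite sequence of known actions."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for act in KNOWN_ACTIONS:
            t = act(s)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def cannot(start, goal):
    """True iff no sequence of known actions achieves the goal."""
    return not any(goal(s) for s in reachable(start))

# Starting from (True, True) — lid on, can upright — the dog can expose
# the trash (reach a state with the lid off) but cannot get the lid off
# while leaving the can upright, since no known action restores uprightness.
```

The stronger notions in the text — no strategy, no guaranteed strategy — would replace this reachability search with game-tree search against nature, but the dependence on the circumscribed action set is the same.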